A proximal-Newton method for unconstrained convex optimization in Hilbert spaces
Authors
Abstract
We propose and study the iteration-complexity of a proximal-Newton method for finding approximate solutions of the problem of minimizing a twice continuously differentiable convex function on a (possibly infinite-dimensional) Hilbert space. We prove global convergence rates for obtaining approximate solutions in terms of function/gradient values. Our main results follow from an iteration-complexity study of a (large-step) inexact proximal point method for solving convex minimization problems. 2000 Mathematics Subject Classification: 90C25, 90C30, 47H05.
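In finite dimensions, the core iteration can be sketched as a regularized Newton step: minimize the second-order model of f at the current point plus a proximal term. The function names, the fixed step-size parameter, and the quadratic test problem below are illustrative assumptions, not the paper's exact scheme (which uses a large-step rule and allows inexact steps).

```python
import numpy as np

def proximal_newton_step(grad, hess, x, lam):
    """One proximal-Newton step: minimize the quadratic model of f at x
    plus the proximal term ||d||^2 / (2*lam), i.e. solve
    (H + I/lam) d = -g for the direction d."""
    g, H = grad(x), hess(x)
    d = np.linalg.solve(H + np.eye(len(x)) / lam, -g)
    return x + d

def minimize(grad, hess, x0, lam=10.0, tol=1e-10, max_iter=100):
    # Iterate until the gradient norm is below tol (an approximate
    # solution in terms of gradient values, as in the abstract above).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        if np.linalg.norm(grad(x)) <= tol:
            break
        x = proximal_newton_step(grad, hess, x, lam)
    return x

# Illustration on a strongly convex quadratic f(x) = 0.5 x^T A x - b^T x,
# whose exact minimizer is A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
x_star = minimize(lambda x: A @ x - b, lambda x: A, np.zeros(2))
```

For a quadratic the regularized Newton system is exact up to the proximal term, so the iterates contract linearly toward the minimizer.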
منابع مشابه
On the Behavior of Damped Quasi-Newton Methods for Unconstrained Optimization
We consider a family of damped quasi-Newton methods for solving unconstrained optimization problems. This family resembles that of Broyden with line searches, except that the change in gradients is replaced by a certain hybrid vector before updating the current Hessian approximation. This damped technique modifies the Hessian approximations so that they are maintained sufficiently positive defi...
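Powell-style damping is one standard instance of the hybrid-vector idea mentioned above: the gradient difference y is blended with B s so that the curvature condition holds and the updated matrix stays positive definite. This is a minimal textbook sketch (the 0.2 threshold is the conventional choice), not necessarily the specific family studied in that paper.

```python
import numpy as np

def damped_bfgs_update(B, s, y, theta_cut=0.2):
    """BFGS update of the Hessian approximation B using Powell's damped
    hybrid vector y_hat, which guarantees s^T y_hat > 0 and hence keeps
    B positive definite."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy >= theta_cut * sBs:
        theta = 1.0                     # enough curvature: plain BFGS
    else:
        theta = (1.0 - theta_cut) * sBs / (sBs - sy)
    y_hat = theta * y + (1.0 - theta) * Bs   # hybrid vector
    return B - np.outer(Bs, Bs) / sBs + np.outer(y_hat, y_hat) / (s @ y_hat)
```

Even when s^T y < 0 (which would destroy positive definiteness in plain BFGS), the damped update remains symmetric positive definite.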
A Hybrid Proximal Point Algorithm for Resolvent operator in Banach Spaces
Equilibrium problems have many uses in optimization theory and convex analysis, which is why different methods have been presented for solving equilibrium problems in different spaces, such as Hilbert spaces and Banach spaces. The purpose of this paper is to provide a method for obtaining a solution to the equilibrium problem in Banach spaces. In fact, we consider a hybrid proximal point algorithm...
A Free Line Search Steepest Descent Method for Solving Unconstrained Optimization Problems
In this paper, we solve unconstrained optimization problems using a free line search steepest descent method. First, we propose a double-parameter scaled quasi-Newton formula for calculating an approximation of the Hessian matrix. The approximation obtained from this formula is a positive definite matrix that satisfies the standard secant relation. We also show that the largest eigenvalue...
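A well-known example of a line-search-free scaled gradient step is the Barzilai-Borwein rule, sketched below as a stand-in for the idea of choosing the step from curvature information; this is not the double-parameter formula of the paper above, whose details are not reproduced here.

```python
import numpy as np

def bb_gradient_descent(grad, x0, alpha0=1e-3, n_iter=200):
    """Gradient descent with a Barzilai-Borwein scaled step
    alpha = (s^T s) / (s^T y), where s and y are the last changes in
    the iterate and the gradient; no line search is performed."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0
    for _ in range(n_iter):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 0:                 # update the scaling only when
            alpha = (s @ s) / (s @ y)  # the curvature estimate is positive
        x, g = x_new, g_new
    return x

# Illustration on the quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
x_bb = bb_gradient_descent(lambda x: A @ x - b, np.zeros(2))
```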
Global Convergence of a Closed-Loop Regularized Newton Method for Solving Monotone Inclusions in Hilbert Spaces
We analyze the global convergence properties of some variants of regularized continuous Newton methods for convex optimization and monotone inclusions in Hilbert spaces. The regularization term is of Levenberg-Marquardt type and acts in an open-loop or closed-loop form. In the open-loop case the regularization term may be of bounded variation.
A dynamic approach to a proximal-Newton method for monotone inclusions in Hilbert spaces, with complexity O(1/n)
In a Hilbert setting, we introduce a new dynamical system and associated algorithms for solving monotone inclusions by rapid methods. Given a maximal monotone operator A, the evolution is governed by the time-dependent operator I − (I + λ(t)A)⁻¹, where the positive control parameter λ(t) tends to infinity as t → +∞. The tuning of λ(·) is done in a closed-loop way, by resolution of the algebraic equ...
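For a linear monotone operator on Rⁿ the resolvent (I + λA)⁻¹ is just a linear solve, so the λ(t) → ∞ regime can be sketched as a proximal point iteration with growing step sizes. The geometric growth rule below is an illustrative open-loop stand-in for the paper's closed-loop tuning.

```python
import numpy as np

def resolvent(A, lam, x):
    """Resolvent of the linear monotone operator A:
    J_lam(x) = (I + lam*A)^{-1} x."""
    return np.linalg.solve(np.eye(len(x)) + lam * A, x)

def large_step_proximal_point(A, x0, lam0=1.0, growth=2.0, n_iter=30):
    # Proximal point iteration x_{k+1} = (I + lam_k A)^{-1} x_k with
    # geometrically growing lam_k, mimicking lam(t) -> infinity.
    x, lam = np.asarray(x0, dtype=float), lam0
    for _ in range(n_iter):
        x = resolvent(A, lam, x)
        lam *= growth
    return x

# A nonsymmetric matrix whose symmetric part is positive definite is a
# (strongly) monotone operator; its unique zero is the origin.
A = np.array([[2.0, -1.0], [1.0, 1.0]])
x_out = large_step_proximal_point(A, np.array([5.0, 3.0]))
```

Since ‖(I + λA)⁻¹‖ ≤ 1/(1 + λm) for a strongly monotone A (m the smallest eigenvalue of its symmetric part), the iterates collapse to the zero of A very quickly as λ grows.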